Pre-training Without Natural Images
Authors
Abstract
Is it possible to use convolutional neural networks pre-trained without any natural images to assist natural image understanding? The paper proposes a novel concept, Formula-driven Supervised Learning. We automatically generate image patterns and their category labels by assigning fractals, which are based on a natural law existing in the background knowledge of the real world. Theoretically, using automatically generated images instead of natural images in the pre-training phase allows us to build a labeled image dataset of effectively infinite scale. Although models pre-trained with the proposed Fractal DataBase (FractalDB), a database without natural images, do not necessarily outperform models pre-trained on human-annotated datasets in all settings, we are able to partially surpass the accuracy of ImageNet/Places pre-trained models. The image representation learned from FractalDB captures a unique feature in the visualization of convolutional layers and attentions.
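As a concrete illustration of formula-driven supervised learning, the sketch below generates fractal "categories" from randomly sampled iterated function systems (IFS) and renders labeled instances with the chaos game. This is a minimal Python/NumPy sketch, not the authors' released FractalDB pipeline; all function names and parameter values (number of transforms, point count, image size) are assumptions chosen for brevity.

import numpy as np

def sample_ifs(n_transforms=4, rng=None):
    """Sample one fractal 'category': a set of random contractive 2-D affine maps."""
    rng = rng or np.random.default_rng()
    maps = []
    for _ in range(n_transforms):
        A = rng.uniform(-1.0, 1.0, size=(2, 2))
        # Rescale so the spectral norm is below 1, keeping the IFS contractive.
        A *= rng.uniform(0.3, 0.8) / (np.linalg.norm(A, 2) + 1e-8)
        b = rng.uniform(-1.0, 1.0, size=2)
        maps.append((A, b))
    return maps

def render_fractal(ifs, n_points=50_000, size=256, rng=None):
    """Render one instance of a category as a binary image via the chaos game."""
    rng = rng or np.random.default_rng()
    pts = np.zeros((n_points, 2))
    x = np.zeros(2)
    for i in range(n_points):
        A, b = ifs[rng.integers(len(ifs))]   # pick one affine map at random
        x = A @ x + b
        pts[i] = x
    pts = pts[100:]                          # drop burn-in points
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    uv = ((pts - lo) / (hi - lo + 1e-8) * (size - 1)).astype(int)
    img = np.zeros((size, size), dtype=np.uint8)
    img[uv[:, 1], uv[:, 0]] = 255
    return img

# Category label = index of the IFS that generated the image; no human annotation.
rng = np.random.default_rng(0)
categories = [sample_ifs(rng=rng) for _ in range(10)]
dataset = [(render_fractal(c, rng=rng), label)
           for label, c in enumerate(categories)
           for _ in range(5)]               # 5 instances per category

In the paper's FractalDB, intra-category variation additionally comes from perturbing the IFS parameters; in this sketch, instances of a category differ only through the random point sampling.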
Similar resources
Language Generation with Recurrent Generative Adversarial Networks without Pre-training
Generative Adversarial Networks (GANs) have shown great promise recently in image generation. Training GANs for text generation has proven to be more difficult, because of the non-differentiable nature of generating text with recurrent neural networks. Consequently, past work has either resorted to pre-training with maximum likelihood or used convolutional networks for generation. In this work, ...
Annotation of Online Shopping Images without Labeled Training Examples
We are interested in the task of image annotation using noisy natural text as training data. An image and its caption convey different information, but are generated by the same underlying concepts. In this paper, we learn latent mixtures of topics that generate image and product descriptions on shopping websites by adapting a topic model for multilingual data (Mimno et al., 2009). We use the t...
Pre-training Attention Mechanisms
Recurrent neural networks with differentiable attention mechanisms have had success in generative and classification tasks. We show that the classification performance of such models can be enhanced by guiding a randomly initialized model to attend to salient regions of the input in early training iterations. We further show that, if explicit heuristics for guidance are unavailable, a model tha...
Knowledge Transfer Pre-training
Pre-training is crucial for learning deep neural networks. Most existing pre-training methods train simple models (e.g., restricted Boltzmann machines) and then stack them layer by layer to form the deep structure. This layerwise pre-training has found strong theoretical foundation and broad empirical support. However, it is not easy to employ such a method to pre-train models without a clear ...
Learning without Training
Achieving high-level skills is generally considered to require intense training, which is thought to optimally engage neuronal plasticity mechanisms. Recent work, however, suggests that intensive training may not be necessary for skill learning. Skills can be effectively acquired by a complementary approach in which the learning occurs in response to mere exposure to repetitive sensory stimulat...
Journal
Journal title: Lecture Notes in Computer Science
Year: 2021
ISSN: 1611-3349, 0302-9743
DOI: https://doi.org/10.1007/978-3-030-69544-6_35